124 research outputs found

    Streaming the Web: Reasoning over dynamic data.

    In the last few years a new research area, called stream reasoning, has emerged to bridge the gap between reasoning and stream processing. While current reasoning approaches are designed to work mainly on static data, the Web is extremely dynamic: information is frequently changed and updated, and new data is continuously generated from a huge number of sources, often at a high rate. In other words, fresh information is constantly made available in the form of streams of new data and updates. Despite some promising investigations in the area, stream reasoning is still in its infancy, both from the perspective of developing models and theories, and from the perspective of designing and implementing systems and tools. The aim of this paper is threefold: (i) we identify the requirements coming from different application scenarios, and we isolate the problems they pose; (ii) we survey existing approaches and proposals in the area of stream reasoning, highlighting their strengths and limitations; (iii) we draw a research agenda to guide the future research and development of stream reasoning. In doing so, we also analyze related research fields to extract algorithms, models, techniques, and solutions that could be useful in the area of stream reasoning. © 2014 Elsevier B.V. All rights reserved.

    DynamiTE: Parallel Materialization of Dynamic RDF Data

    One of the main advantages of using semantically annotated data is that machines can reason on it, deriving implicit knowledge from explicit information. In this context, materializing every possible implicit derivation from a given input can be computationally expensive, especially when considering large data volumes. Most of the solutions that address this problem rely on the assumption that the information is static, i.e., that it does not change, or changes very infrequently. However, the Web is extremely dynamic: online newspapers, blogs, social networks, etc., are frequently changed so that outdated information is removed and replaced with fresh data. This demands a materialization that is not only scalable, but also reactive to changes. In this paper, we consider the problem of incremental materialization, that is, how to update the materialized derivations when new data is added or removed. To this purpose, we consider the ρdf RDFS fragment [12], and present a parallel system that implements a number of algorithms to quickly recalculate the derivation. In case new data is added, our system uses a parallel version of the well-known semi-naive evaluation of Datalog. In case of removals, we have implemented two algorithms, one based on previous theoretical work, and another one that is more efficient since it does not require a complete scan of the input. We have evaluated the performance using a prototype system called DynamiTE, which organizes the knowledge bases with a number of indices to facilitate the query process and exploits parallelism to improve the performance. The results show that our methods are indeed capable of recalculating the derivation in a short time, opening the door to reasoning on much more dynamic data than is currently possible. © 2013 Springer-Verlag
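    The semi-naive evaluation the abstract mentions can be illustrated with a small, sequential sketch. This is a toy single-rule example (transitivity of rdfs:subClassOf, one of the ρdf entailments), not DynamiTE's parallel implementation; the triple representation and predicate name are assumptions made for illustration. The key idea is that each round only joins the freshly derived facts (the delta) against the full set, so known facts are never re-derived.

    ```python
    # Semi-naive evaluation sketch for one RDFS rule:
    # (a subClassOf b), (b subClassOf c) -> (a subClassOf c)

    SUB = "subClassOf"

    def seminaive_subclass(triples):
        """Materialize the transitive closure of subClassOf triples.

        Each iteration joins only the newly derived facts (delta)
        against the accumulated set, instead of re-joining everything.
        """
        total = set(triples)
        delta = set(triples)
        while delta:
            new = set()
            for (a, p, b) in delta:
                if p != SUB:
                    continue
                # delta fact on the left: (a sub b), (b sub c) -> (a sub c)
                for (x, q, c) in total:
                    if q == SUB and x == b:
                        new.add((a, SUB, c))
                # delta fact on the right: (z sub a), (a sub b) -> (z sub b)
                for (z, q, x) in total:
                    if q == SUB and x == a:
                        new.add((z, SUB, b))
            delta = new - total   # keep only genuinely new derivations
            total |= delta
        return total

    facts = {("Cat", SUB, "Mammal"),
             ("Mammal", SUB, "Animal")}
    closure = seminaive_subclass(facts)
    # the derived fact ("Cat", "subClassOf", "Animal") appears exactly once
    ```

    A full ρdf materializer would apply the same delta-driven loop over the whole rule set (subPropertyOf, domain, range, etc.); DynamiTE additionally partitions this work across parallel workers and indexes the triples to speed up the joins.
    
    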

    Expressive Stream Reasoning with Laser

    An increasing number of use cases require a timely extraction of non-trivial knowledge from semantically annotated data streams, especially on the Web and for the Internet of Things (IoT). Often, this extraction requires expressive reasoning, which is challenging to compute on large streams. We propose Laser, a new reasoner that supports a pragmatic, non-trivial fragment of the logic LARS which extends Answer Set Programming (ASP) for streams. At its core, Laser implements a novel evaluation procedure which annotates formulae to avoid the re-computation of duplicates at multiple time points. This procedure, combined with a judicious implementation of the LARS operators, is responsible for significantly better runtimes than the ones of other state-of-the-art systems like C-SPARQL and CQELS, or an implementation of LARS which runs on the ASP solver Clingo. This enables the application of expressive logic-based reasoning to large streams and opens the door to a wider range of stream reasoning use cases. Comment: 19 pages, 5 figures. Extended version of accepted paper at ISWC 201

    A novel method to optimize autologous adipose tissue recovery with extracellular matrix preservation

    This work aims to characterize a new method to recover low-manipulated human adipose tissue, enriched with adipose tissue-derived mesenchymal stem cells (ATD-MSCs) for autologous use in regenerative medicine applications. Lipoaspirated fat collected from patients was processed through Lipocell, a Class II-a medical device for dialysis of adipose tissue, by varying filter sizes and washing solutions. ATD-MSC yield was measured with flow cytometry after stromal vascular fraction (SVF) isolation in fresh and cultured samples. Purification from oil and blood was measured after centrifugation with spectrophotometer analysis. Extracellular matrix preservation was assessed through hematoxylin and eosin (H&E) staining and biochemical assay for total collagen, type-2 collagen, and glycosaminoglycans (GAGs) quantification. Flow cytometry showed a two-fold increase of ATD-MSC yield in treated samples in comparison with untreated lipoaspirate; no differences were reported when varying filter size. The association of dialysis and washing thoroughly removed blood and oil from samples. Tissue architecture and extracellular matrix integrity were unaltered after Lipocell processing. The dialysis procedure associated with Ringer's lactate preserves the proliferation ability of ATD-MSCs in cell culture. The characterization of the product showed that Lipocell is an efficient method for purifying the tissue from undesired byproducts and preserving ATD-MSC vitality and extracellular matrix (ECM) integrity, resulting in a promising tool for regenerative medicine applications.

    A unifying model for distributed data-intensive systems

    Modern applications handle increasingly larger volumes of data, generated at an unprecedented and constantly growing rate. They introduce challenges that are radically transforming the research fields that gravitate around data management and processing, resulting in a flourishing of distributed data-intensive systems. Each such system comes with its specific assumptions, data and processing model, design choices, implementation strategies, and guarantees. Yet, the problems data-intensive systems face and the solutions they propose frequently overlap. This tutorial presents a unifying model for data-intensive systems that dissects them into core building blocks, enabling a precise and unambiguous description and a detailed comparison. From the model, we derive a list of classification criteria and we use them to build a taxonomy of state-of-the-art systems. The tutorial offers a global view of the vast research field of data-intensive systems, highlighting interesting observations on its current state and suggesting promising research directions